An Additive Sparse Penalty for Variable Selection in High-Dimensional Linear Regression Model

Authors

Abstract


Similar Articles

Variable selection in linear regression through adaptive penalty selection

Model selection procedures often use a fixed penalty, such as Mallows’ Cp, to avoid choosing a model which fits a particular data set extremely well. These procedures are often devised to give an unbiased risk estimate when a particular chosen model is used to predict future responses. As a correction for not including the variability induced in model selection, generalized degrees of freedom i...
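The fixed penalty mentioned in this abstract, Mallows' Cp, compares a candidate submodel's residual sum of squares against the error-variance estimate from the full model. A minimal sketch (the function name `mallows_cp` and the setup are ours, not from the paper):

```python
import numpy as np

def mallows_cp(X_full, X_sub, y):
    """Mallows' Cp for a candidate submodel:
    Cp = RSS_sub / s2 - n + 2 * p_sub,
    where s2 is the error-variance estimate from the full model."""
    n = len(y)

    def rss(X):
        # least-squares fit, then residual sum of squares
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)

    p_full = X_full.shape[1]
    s2 = rss(X_full) / (n - p_full)   # full-model variance estimate
    p_sub = X_sub.shape[1]
    return rss(X_sub) / s2 - n + 2 * p_sub
```

As a sanity check, evaluating the full model against itself gives Cp equal to the number of its own predictors, since RSS_full / s2 = n - p_full.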


High-Dimensional Sparse Additive Hazards Regression

High-dimensional sparse modeling with censored survival data is of great practical importance, as exemplified by modern applications in high-throughput genomic data analysis and credit risk analysis. In this article, we propose a class of regularization methods for simultaneous variable selection and estimation in the additive hazards model, by combining the nonconcave penalized likelihood appr...


Grouped variable selection in high dimensional partially linear additive Cox model

In the analysis of survival outcomes supplemented with both clinical information and high-dimensional gene expression data, the traditional Cox proportional hazards model fails to meet some emerging needs in biological research. First, the number of covariates is generally much larger than the sample size. Second, predicting an outcome with individual gene expressions is inadequate because a gene’s expr...


A Stepwise Regression Method and Consistent Model Selection for High-dimensional Sparse Linear Models

We introduce a fast stepwise regression method, called the orthogonal greedy algorithm (OGA), that selects input variables to enter a p-dimensional linear regression model (with p ≫ n, the sample size) sequentially so that the selected variable at each step minimizes the residual sum of squares. We derive the convergence rate of OGA and develop a consistent model selection procedure along the OGA ...
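The selection rule this abstract describes, adding at each step the variable that most reduces the residual sum of squares, can be sketched as a greedy forward search. This is an illustrative sketch of that rule only (the function name `oga_select` is ours; the paper's actual OGA also orthogonalizes the design and adds a stopping criterion):

```python
import numpy as np

def oga_select(X, y, k):
    """Greedy forward selection: at each of k steps, add the predictor
    whose inclusion yields the smallest residual sum of squares."""
    n, p = X.shape
    selected = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            # refit least squares on the candidate subset
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected
```

With a strong sparse signal, the first few greedy steps recover the true support, which is the behavior the consistency result in the paper formalizes.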


Nearly Optimal Minimax Estimator for High Dimensional Sparse Linear Regression

We present estimators for a well-studied statistical estimation problem: estimation in the linear regression model with soft sparsity constraints (an ℓq constraint with 0 < q ≤ 1) in the high-dimensional setting. We first present a family of estimators, called the projected nearest neighbor estimator, and show, by using results from convex geometry, that such an estimator is within a logarithmic ...



Journal

Journal title: Communications for Statistical Applications and Methods

Year: 2015

ISSN: 2383-4757

DOI: 10.5351/csam.2015.22.2.147